🛠️ All DevTools
Showing 341–360 of 4289 tools
Last Updated: April 23, 2026 at 12:00 PM
Show HN: Anvil – Desktop App for Spec Driven Development
Show HN (score: 5)[Other] Very excited to share Anvil. I built Anvil to take back control when working with parallel coding agents. It comes with one-click worktree isolation and first-class spec support.

Claude Code and similar coding TUIs are very eager to start writing code, even before their human babysitter fully understands the implications of what they are about to build.

The core insight behind Anvil is that it is much easier to write high-quality code that matches the author's intent after iterating on an external plan with your agent.

Align on the architecture, implementation, and verification strategy in a markdown file; after that, execution is pretty straightforward.

This is not a new concept, but the user experience for this workflow within TUI apps is pretty rough. Claude creates non-semantic plan names like "aquamarine-owl" that are trapped within a single agent context. Spinning up multiple agents to check on different aspects of a plan is annoying and slow, and managing terminal tabs is pure hell.

So I built Anvil, a fully open-source (MIT-licensed) project.
Show HN: Open-Source Animal Crossing–Style UI for Claude Code Agents
Hacker News (score: 11)[Other] We posted here on Monday and got some great feedback. We've implemented a few of the most requested updates:

- iMessage channel support (agents can text people and you can text agents); other channels are simple to extend
- A built-in browser (agents can navigate and interact with websites)
- Scheduling (run tasks on a timer / cron / in the future)
- Built-in tunneling so that agents can share local stuff with you over the internet
- More robust MCP and Skills support so anyone can extend it
- Auto-approval for agent requests

If you didn't see the original: Outworked is a desktop app where Claude Code agents work as a small "team." You give it a goal, and an orchestrator breaks it into tasks and assigns them across agents.

Agents can run in parallel, talk to each other, write code, and now also browse the web and send messages.

It runs locally and plugs into your existing Claude Code setup.

Would love to hear what we should build next. Thanks again!
Yeachan-Heo/oh-my-claudecode
GitHub Trending[DevOps] Teams-first Multi-agent orchestration for Claude Code
Show HN: LLM-Gateway – Zero-Trust LLM Gateway
Show HN (score: 6)[DevOps] I built an OpenAI-compatible LLM gateway that routes requests to OpenAI, Anthropic, Ollama, vLLM, llama-server, SGLang... anything that speaks /v1/chat/completions. Single Go binary, one YAML config file, no infrastructure.

It does the things you'd expect from this kind of gateway: semantic routing via a three-layer cascade (keyword heuristics, embedding similarity, LLM classifier) that picks the best model when clients omit the model field, and weighted round-robin load balancing across local inference servers with health checks and failover.

The part I think is most interesting is the network layer. The gateway and backends communicate over zrok/OpenZiti overlay networks: reach a GPU box behind NAT, expose the gateway to clients, put components anywhere with internet connectivity behind firewalls, with no port forwarding and no VPN. Zero-trust in both directions. Most LLM proxies solve the API translation problem. This one also solves the network problem.

Apache 2.0. https://github.com/openziti/llm-gateway

I work for NetFoundry, which sponsors the OpenZiti project this is built on.
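The three-layer routing cascade described above can be sketched roughly like this. The rules, route descriptions, and model names below are hypothetical stand-ins; the real gateway's heuristics, embeddings, and classifier are of course more sophisticated:

```python
# Minimal sketch of a three-layer routing cascade (hypothetical rules/models).

KEYWORD_RULES = {          # layer 1: cheap keyword heuristics
    "code": "coder-model",
    "translate": "general-model",
}

def embedding_similarity(prompt, routes):
    """Layer 2 stand-in: crude bag-of-words overlap instead of real embeddings."""
    words = set(prompt.lower().split())
    score, model = max((len(words & set(desc.split())), m) for desc, m in routes)
    return model if score > 0 else None

def llm_classifier(prompt):
    """Layer 3 stand-in: where you'd call a small LLM to classify the prompt."""
    return "default-model"

def route(prompt):
    # Layer 1: keyword heuristics resolve the cheap, obvious cases.
    for kw, model in KEYWORD_RULES.items():
        if kw in prompt.lower():
            return model
    # Layer 2: similarity against per-route descriptions.
    model = embedding_similarity(prompt, [
        ("write python function bug", "coder-model"),
        ("summarize article news", "general-model"),
    ])
    if model:
        return model
    # Layer 3: fall back to an LLM classifier.
    return llm_classifier(prompt)
```

Each layer only runs when the cheaper one before it fails to produce a confident answer, which keeps the common case fast.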
Show HN: Forkrun – NUMA-aware shell parallelizer (50×–400× faster than parallel)
Hacker News (score: 57)[Other] forkrun is the culmination of a 10-year-long journey focused on "how to make shell parallelization fast". What started as a standard "fork jobs in a loop" has turned into a lock-free, CAS-retry-loop-free, SIMD-accelerated, self-tuning, NUMA-aware shell-based stream parallelization engine that is (mostly) a drop-in replacement for xargs -P and GNU Parallel.

On my 14-core/28-thread i9-7940x, forkrun achieves:

* 200,000+ batch dispatches/sec (vs ~500 for GNU Parallel)

* ~95–99% CPU utilization across all 28 logical cores, even when the workload is non-existent (bash no-ops / `:`) (vs ~6% for GNU Parallel). These benchmarks are intentionally worst-case (near-zero work per task) because they measure the capability of the parallelization framework itself, not how much work an external tool can do.

* Typically 50×–400× faster on real high-frequency, low-latency workloads (vs GNU Parallel)

A few of the techniques that make this possible:

* Born-local NUMA: stdin is splice()'d into a shared memfd, then pages are placed on the target NUMA node via set_mempolicy(MPOL_BIND) before any worker touches them, making the memfd NUMA-spliced. Each NUMA node only claims work that is already born-local on its node. Stealing from other nodes is permitted under some conditions when no local work exists.

* SIMD scanning: per-node indexers/scanners use AVX2/NEON to find line boundaries (delimiters) at speeds approaching memory bandwidth, and publish byte offsets and line counts into per-node lock-free rings.

* Lock-free claiming: workers claim batches with a single atomic_fetch_add: no locks, no CAS retry loops; contention is reduced to a single atomic on one cache line.

* Memory management: a background thread uses fallocate(PUNCH_HOLE) to reclaim space without breaking the logical offset system.

…and that's just the surface. The implementation uses many additional systems-level techniques (phase-aware tail handling, adaptive batching, early-flush detection, etc.) to eliminate overhead, increase throughput, and reduce latency at every stage.

In its fastest (-b) mode (fixed-size batches, minimal processing), it can exceed 1B lines/sec.

forkrun ships as a single bash file with an embedded, self-extracting C extension: no Perl, no Python, no install, and full native support for parallelizing arbitrary shell functions. The binary is built in public GitHub Actions so you can trace it back to CI (see the GitHub "Blame" on the line containing the base64 embeddings). Trying it is literally two commands:

    . frun.bash
    frun shell_func_or_cmd < inputs

For benchmarking scripts and results, see the BENCHMARKS dir in the GitHub repo.

For an architecture deep dive, see the DOCS dir in the GitHub repo.

Happy to answer questions.
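The lock-free claiming step described above can be modeled in miniature: every worker advances one shared counter to claim its next batch, and the claims partition the input with no overlap. This single-process Python sketch only illustrates the bookkeeping; forkrun does the real thing with one atomic_fetch_add in C:

```python
# Toy model of forkrun-style batch claiming: each claim bumps a single shared
# counter. itertools.count stands in for the atomic counter here; this is a
# single-process illustration, not a lock-free primitive.
import itertools

def claim_batches(n_lines, batch_size, n_workers):
    counter = itertools.count(0, batch_size)  # shared "next batch start"
    claims = {w: [] for w in range(n_workers)}
    w = 0
    while True:
        start = next(counter)         # the one "fetch_add"-style operation
        if start >= n_lines:
            break                     # counter ran past the input: no work left
        claims[w].append(range(start, min(start + batch_size, n_lines)))
        w = (w + 1) % n_workers       # round-robin hand-out, for illustration
    return claims
```

Because every claim comes from the same monotone counter, no two workers ever receive overlapping batches, which is the property the atomic fetch-add buys without any locks or CAS retry loops.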
FreeCAD/FreeCAD
GitHub Trending[Other] Official source code of FreeCAD, a free and opensource multiplatform 3D parametric modeler.
Show HN: Grafana TUI – Browse Grafana dashboards in the terminal
Show HN (score: 11)[Monitoring/Observability] I built a terminal UI for browsing Grafana dashboards. It connects to any Grafana instance and lets you explore dashboards without leaving the terminal.

It renders the most common panel types (time series, bar charts, gauges, heatmaps, etc.). You can change the time range, set dashboard variables, and filter series.

I built this because I spend most of my day in the terminal and wanted a quick way to glance at dashboards without switching to the browser. It's not perfect by any means, but it's a nifty and useful tool.

Built with Go, Bubble Tea, ntcharts, and Claude (of course). You can install it via Homebrew:

    brew install lovromazgon/tap/grafana-tui

...and try it out against Grafana's public playground:

    grafana-tui --url https://play.grafana.org
Ninja is a small build system with a focus on speed
Hacker News (score: 100)[Build/Deploy]
Telnyx package compromised on PyPI
Hacker News (score: 12)[Other] https://github.com/team-telnyx/telnyx-python/issues/235

https://www.aikido.dev/blog/telnyx-pypi-compromised-teampcp-canisterworm
Show HN: I put an AI agent on a $7/month VPS with IRC as its transport layer
Show HN (score: 335)[Other] The stack: two agents on separate boxes. The public one (nullclaw) is a 678 KB Zig binary using ~1 MB RAM, connected to an Ergo IRC server. Visitors talk to it via a gamja web client embedded in my site. The private one (ironclaw) handles email and scheduling, reachable only over Tailscale via Google's A2A protocol.

Tiered inference: Haiku 4.5 for conversation (sub-second, cheap), Sonnet 4.6 for tool use (only when needed). Hard cap at $2/day.

A2A passthrough: the private-side agent borrows the gateway's own inference pipeline, so there's one API key and one billing relationship regardless of who initiated the request.

You can talk to nully at https://georgelarson.me/chat/ or connect with any IRC client to irc.georgelarson.me:6697 (TLS), channel #lobby.
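The tiered-inference-plus-hard-cap idea above can be sketched in a few lines. The per-call costs and the routing rule here are invented for illustration and are not the author's actual accounting; only the model names and the $2/day cap come from the post:

```python
# Sketch: cheap conversational tier by default, escalate for tool use,
# refuse outright once the daily spend cap would be exceeded.
DAILY_CAP_USD = 2.00
COST_PER_CALL = {"haiku-4.5": 0.001, "sonnet-4.6": 0.01}  # assumed flat costs

class Router:
    def __init__(self):
        self.spent_today = 0.0  # reset once a day in a real deployment

    def pick_model(self, needs_tools: bool):
        model = "sonnet-4.6" if needs_tools else "haiku-4.5"
        cost = COST_PER_CALL[model]
        if self.spent_today + cost > DAILY_CAP_USD:
            return None          # hard cap: decline rather than overspend
        self.spent_today += cost
        return model
```

The useful property of a hard cap checked before each call is that the worst-case daily bill is bounded no matter how chatty visitors get.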
Show HN: Fio: 3D World editor/game engine – inspired by Radiant and Hammer
Hacker News (score: 45)[Other] A liminal brush-based CSG editor and game engine with a unified (forward) renderer, inspired by Radiant and Worldcraft/Hammer.

Compact and lightweight (target: Snapdragon 8cx, OpenGL 3.3).

Real-time lighting with stencil shadows, without the need for pre-baked compilation.
Show HN: Layerleak – Like Trufflehog, but for Docker Hub
Show HN (score: 5)[Other]
Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3
Hacker News (score: 45)[Database] I built a SQLite VFS in Rust that serves cold queries directly from S3 with sub-second performance, and often much faster.

It's called turbolite. It is experimental, buggy, and may corrupt data. I would not trust it with anything important yet.

I wanted to explore whether object storage has gotten fast enough to support embedded databases over cloud storage. Filesystems reward tiny random reads and in-place mutation. S3 rewards fewer requests, bigger transfers, immutable objects, and aggressively parallel operations where bandwidth is often the real constraint. This was explicitly inspired by turbopuffer's ground-up S3-native design: https://turbopuffer.com/blog/turbopuffer

The use case I had in mind is lots of mostly-cold SQLite databases (database-per-tenant, database-per-session, or database-per-user architectures) where keeping a separate attached volume for inactive databases feels wasteful. turbolite assumes a single write source and is aimed much more at "many databases with bursty cold reads" than "one hot database."

Instead of doing naive page-at-a-time reads from a raw SQLite file, turbolite introspects SQLite B-trees, stores related pages together in compressed page groups, and keeps a manifest that is the source of truth for where every page lives. Cache misses use seekable zstd frames and S3 range GETs, so fetching one needed page does not require downloading an entire object.

At query time, turbolite can also pass storage operations from the query plan down to the VFS to front-run downloads for indexes and large scans in the order they will be accessed.

You can tune how aggressively turbolite prefetches. For point queries and small joins, it can stay conservative and avoid prefetching whole tables. For scans, it can get much more aggressive.

It also groups pages by page type in S3. Interior B-tree pages are bundled separately and loaded eagerly. Index pages prefetch aggressively. Data pages are stored by table. The goal is to make cold point queries and joins decent, while making scans less awful than naive remote paging would be.

On a 1M-row / 1.5 GB benchmark on EC2 + S3 Express, I'm seeing results like sub-100ms cold point lookups, sub-200ms cold 5-join profile queries, and sub-600ms scans from an empty cache. It's somewhat slower on normal S3/Tigris.

Current limitations are pretty straightforward: it's single-writer only, and it is still very much a systems experiment rather than production infrastructure.

I'd love feedback from people who've worked on SQLite-over-network, storage engines, VFSes, or object-storage-backed databases. I'm especially interested in whether the B-tree-aware grouping / manifest / seekable-range-GET direction feels like the right one to keep pushing.
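The manifest idea described above (the location of every page is known up front, so a cache miss becomes one ranged read instead of a whole-object download) can be sketched like this, with an in-memory dict standing in for S3 and a made-up packing layout, not turbolite's actual format:

```python
# Sketch of manifest + range-GET page reads (hypothetical layout).
OBJECTS = {  # stand-in for S3: object key -> bytes
    "group-001": b"".join(bytes([i]) * 16 for i in range(4)),  # 4 packed "pages"
}

MANIFEST = {  # page number -> (object key, byte offset, length)
    pageno: ("group-001", pageno * 16, 16) for pageno in range(4)
}

def read_page(pageno: int) -> bytes:
    key, offset, length = MANIFEST[pageno]
    # With a real S3 client this line would be a ranged GET, e.g.
    #   s3.get_object(Bucket=..., Key=key,
    #                 Range=f"bytes={offset}-{offset + length - 1}")
    return OBJECTS[key][offset:offset + length]
```

Because the manifest, not the SQLite file layout, is the source of truth, related pages can be packed together (and compressed) however best suits S3's preference for fewer, larger, immutable objects.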
Taming LLMs: Using Executable Oracles to Prevent Bad Code
Hacker News (score: 31)[Other]
$500 GPU outperforms Claude Sonnet on coding benchmarks
Hacker News (score: 31)[Other]
Stripe Projects: Provision and manage services from the CLI
Hacker News (score: 71)[CLI Tool]
The RISE RISC-V Runners: free, native RISC-V CI on GitHub
Hacker News (score: 76)[Build/Deploy]
Moving from GitHub to Codeberg, for lazy people
Hacker News (score: 501)[Other]
Show HN: Paseo – Open-source coding agent interface (desktop, mobile, CLI)
Show HN (score: 7)[Other] Hey HN, I'm Mo. I'm building Paseo, a multi-platform interface for running Claude Code, Codex, and OpenCode. The daemon runs on any machine (your MacBook, a VPS, whatever) and clients (web, mobile, desktop, CLI) connect over WebSocket (there's a built-in E2EE relay for convenience, but you can opt out).

I started working on Paseo last September as a push-to-talk voice interface for Claude Code. I wanted to bounce ideas hands-free while going on walks; after a while I wanted to see what the agent was doing, then I wanted to text it when I couldn't talk, then I wanted to see diffs and run multiple agents. I kept fixing rough edges and adding features, and slowly it became what it is today.

What it does:

- Run multiple providers through the same UI
- Works on macOS, Linux, Windows, iOS, Android, and web
- Manage agents on different machines from the same UI
- E2EE relay for mobile connectivity
- Local voice chat and dictation (NVIDIA Parakeet + Kokoro + Sherpa ONNX)
- Split panes to work with agents, files, and terminals side by side
- Git panel to review diffs and do common actions (commit, push, create PR, etc.)
- Git worktree management so agents don't step on each other
- Docker-style CLI to run agents
- No telemetry, no tracking, no login

Paseo does not call inference APIs directly or extract your OAuth tokens. It wraps your first-party agent CLIs and runs them exactly as you would in your terminal. Your sessions, your system prompts, your tools: nothing is intercepted or modified.

Stack: The daemon is written in TypeScript. The app uses Expo and compiles to both native mobile apps and web. The desktop app is in Electron (I started with Tauri and had to migrate). Sharing the same codebase across different form factors was challenging, but I'd say that with discipline it's doable, and the result has been worth it, as most features I build automatically work in all clients. I did have to implement some platform-specific stuff, especially around gestures, audio, and scroll behavior. The relay is built on top of Cloudflare Durable Objects; so far it's holding up quite well.

I love using the app, but I am even more excited about the possibilities of the CLI, as it becomes a primitive for more advanced agent orchestration: it has much better ergonomics than existing harnesses, and I'm already using it to experiment with loops and agent teams, although it's still new territory.

How Paseo compares to similar apps: Anthropic and OpenAI already do some of what Paseo does (Claude Code Remote Control, the Codex app, etc.), but with mixed quality, and you're locked into their models. Most other alternatives I've found are either closed source or not flexible enough for my needs.

The license is AGPL-3.0. The desktop app ships with a daemon, so that's all you need. But you can also `npm install -g @getpaseo/cli` for headless mode and connect via any client.

I mainly use a Mac, so Linux and Windows have mostly been tested by a small group of early adopters. If you run into issues, I'd appreciate bug reports on GitHub!

Repo: https://github.com/getpaseo/paseo

Homepage: https://paseo.sh/

Discord: https://discord.gg/jz8T2uahpH

Happy to answer questions about the product, architecture, or whatever else!

---

I resubmitted this post because I forgot to add the URL and it didn't allow me to add it later.
Show HN: Veil – Dark mode PDFs without destroying images, runs in the browser
Hacker News (score: 52)[Other] Hi HN! Here's a tool I just deployed that renders PDFs in dark mode without destroying the images. Internal and external links stay intact, and I decided to implement export since I'm not a fan of platform lock-in: you can view your dark PDF in your preferred reader, on any device. It's a side project born from a personal need first and foremost. When I was reading, in the factory, the books that eventually helped me get out of it, I had the problem that many study materials and books contained images and charts that forced me, with the dark readers available at the time, to always keep the original file open in multitasking, since the images became, to put it mildly, strange. I hope it can help some of you who have this same need. I think it could be very useful for researchers, but only future adoption will tell.

With that premise, I'd like to share the choices that made all of this possible. To do so, I'll walk through the three layers that Veil creates from the original PDF:

- Layer 1: CSS filter. I use invert(0.86) hue-rotate(180deg) on the main canvas. I use 0.86 instead of 1.0 because I found that full inversion produces a pure black and pure white that are too aggressive for prolonged reading. 0.86 yields a soft dark grey (around #242424, though it depends on the document's white) and a muted white (around #DBDBDB) for the text, which I found to be the most comfortable value for hours of reading.

- Layer 2: image protection. A second canvas is positioned on top of the first, this time with no filters. Through PDF.js's public API getOperatorList(), I walk the PDF's operator list and reconstruct the CTM stack, that is, the save, restore, and transform operations the PDF uses to position every object on the page. When I encounter a paintImageXObject (opcode 85 in PDF.js v5), the current transformation matrix gives me the exact bounds of the image. At that point I copy those pixels from a clean render onto the overlay. I didn't fork PDF.js because it would have become a maintenance nightmare given the size of the codebase and the frequent updates. Images also receive OCR treatment: text contained in charts and images becomes selectable, just like any other text on the page. At this point we have the text inverted and the images intact. But what if the page is already dark? Maybe the chapter title pages are black with white text? The next layer takes care of that.

- Layer 3: already-dark page detection. After rendering, the background brightness is measured by sampling the edges and corners of the page (where you're most likely to find pure background, without text or images in the way). The BT.601 formula is used to calculate perceived brightness by weighting the three color channels as the human eye sees them: green at 58.7%, red at 29.9%, blue at 11.4%. These weights reflect biology: the eye evolved in natural environments where distinguishing shades of green (vegetation, predators in the grass) was a matter of survival, while blue (sky, water) was less critical. If the average luminance falls below 40%, the page is flagged as already dark and the inversion is skipped, returning the original page. Presentation slides with dark backgrounds stay exactly as they are, instead of being inverted into something blinding.

Scanned documents are detected automatically and receive OCR via Tesseract.js, making text selectable and copyable even on PDFs that are essentially images. Everything runs locally, and no framework was used, just vanilla JS, which is why it's an installable PWA that works offline too.

Here's the link to the app along with the repository: https://veil.simoneamico.com | https://github.com/simoneamico-ux-dev/veil

I hope Veil can make your reading more pleasant.
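The Layer 3 check described above (BT.601 luminance over edge samples, skip inversion below 40%) can be sketched as follows; the sampling and threshold handling are simplified relative to the app:

```python
# Minimal sketch of already-dark page detection via BT.601 luminance.
def bt601_luma(r, g, b):
    # Perceived brightness: channel weights per ITU-R BT.601
    return 0.299 * r + 0.587 * g + 0.114 * b

def is_already_dark(edge_pixels, threshold=0.40):
    # edge_pixels: (r, g, b) tuples in 0-255, sampled from page edges/corners,
    # where pure background is most likely to appear
    avg = sum(bt601_luma(*p) for p in edge_pixels) / len(edge_pixels)
    return avg / 255 < threshold  # below 40% perceived brightness: skip inversion
```

A white page (luma near 255) fails the check and gets inverted; a near-black slide passes it and is left untouched.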
I'm open to any feedback. Thanks, everyone!